- The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance being provided, we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1, or "very high research activity," institutions. We found that the guidance provided to researchers: (1) asks them to refer to external sources of information, such as funding agencies and publishers, to stay up to date, and to use institutional resources for training and education; (2) asks them to understand specific GenAI attributes that shape research, such as predictive modeling, knowledge cutoff dates, data provenance, and model limitations, and to educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI, how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as over-reliance on GenAI, legal consequences, and risks to their institutions. Overall, the guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.
- The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs focused on regulating its use, particularly among students, before shifting toward advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI use. This paper presents an analysis of documents produced by 116 US universities classified as R1, or "very high research activity," institutions, providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found that a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half of the institutions provided sample syllabi (N = 65, 56%), and half (N = 58, 50%) provided sample GenAI curricula and activities to help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities, whereas references to code and STEM-related activities were infrequent, and often vague even when mentioned (N = 58, 50%). Based on our findings, we caution that guidance for faculty can become burdensome, as policies suggest or imply substantial revisions to existing pedagogical practices. (Free, publicly accessible full text available March 1, 2026.)
- Since the release of ChatGPT in 2022, generative AI (GenAI) has been increasingly used in higher education computing classrooms across the United States. While scholars have examined overall institutional guidance on the use of GenAI, and reports have documented schools' responses in the form of broad guidance to instructors, we do not know what policies and practices instructors are actually adopting and how these are communicated to students through course syllabi. To study instructors' policy guidance, we collected 98 computing course syllabi from 54 R1 institutions in the U.S. and studied the GenAI policies they adopted and the surrounding discourse. Our analysis shows that (1) most instructions related to GenAI use appeared as part of the course's academic integrity policy, and (2) most syllabi prohibited or restricted GenAI use, often warning students about the broader implications of using GenAI, e.g., lack of veracity, privacy risks, and hindering learning. Beyond this, there was wide variation in how instructors approached GenAI, including a focus on how to cite GenAI use, conceptualizing GenAI as an assistant, often in an anthropomorphic manner, and mentioning specific GenAI tools for use. We discuss the implications of our findings and conclude with current best practices for instructors. (Free, publicly accessible full text available February 12, 2026.)